We introduce the task of spotting temporally precise, fine-grained events in video (detecting the precise moment in time at which an event occurs). Precise spotting requires models to reason globally about the full-timescale actions and locally to identify subtle frame-to-frame appearance and motion differences that pinpoint events during those actions. Surprisingly, we find that top-performing solutions to prior video understanding tasks, such as action detection and segmentation, do not simultaneously meet both requirements. In response, we propose E2E-Spot, a compact end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU. We demonstrate that E2E-Spot significantly outperforms recent baselines adapted from the video action detection, segmentation, and spotting literature to the precise spotting task. Finally, we contribute new annotations and splits for several fine-grained sports action datasets to make these datasets suitable for future work on precise spotting.
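As a rough, paper-agnostic illustration of the spotting setting above: once a model produces per-frame event scores, discrete spots are typically recovered by thresholding and temporal non-maximum suppression. The function below is a minimal sketch of that post-processing step only; the threshold and window values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spot_events(scores, threshold=0.5, window=5):
    """Turn per-frame event scores into discrete spots: keep frames that
    exceed the threshold and are a local maximum within +/- window frames."""
    scores = np.asarray(scores, dtype=float)
    spots = []
    for t, s in enumerate(scores):
        if s < threshold:
            continue
        lo, hi = max(0, t - window), min(len(scores), t + window + 1)
        if s >= scores[lo:hi].max():
            spots.append(t)
    return spots
```

For example, a score track with two clear peaks yields two spot frames, regardless of how wide the surrounding activity is.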
Methods for single-view reconstruction typically rely on viewpoint annotations, silhouettes, the absence of background, multiple views of the same instance, a template shape, or symmetry. We avoid all such supervision and assumptions by explicitly leveraging the consistency between images of different object instances. As a result, our method can learn from large collections of unlabelled images depicting the same object category. Our main contributions are two approaches to leverage cross-instance consistency: (i) progressive conditioning, a training strategy to gradually specialize the model from categories to instances in a curriculum-learning fashion; and (ii) neighbor reconstruction, a loss enforcing consistency between instances having similar shape or texture. Also critical to the success of our method are: our structured autoencoding architecture, which decomposes an image into explicit shape, texture, pose, and background; an adapted formulation of differentiable rendering; and a new optimization scheme alternating between 3D and pose learning. We compare our approach, UNICORN, both on the diverse synthetic ShapeNet dataset, the classical benchmark for methods requiring multiple views as supervision, and on standard real-image benchmarks (Pascal3D+ Cars, CUB) for which most methods require known templates and silhouette annotations. We also demonstrate applicability to more challenging real-world collections (CompCars, LSUN), where silhouettes are not available and images are not cropped around the object.
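The neighbor-reconstruction idea above can be sketched independently of the full pipeline: for each instance, find the training instance with the closest shape or texture code, and penalize a reconstruction produced with the neighbor's code swapped in. The `render` callback and the mean-squared pixel loss below are illustrative placeholders, not the paper's actual differentiable renderer or loss.

```python
import numpy as np

def nearest_neighbors(codes):
    """For each row in `codes` (one shape/texture embedding per instance),
    return the index of the closest other row by Euclidean distance."""
    codes = np.asarray(codes, dtype=float)
    d = np.linalg.norm(codes[:, None, :] - codes[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # an instance is not its own neighbor
    return d.argmin(axis=1)

def neighbor_swap_loss(images, codes, render):
    """Neighbor-reconstruction sketch: re-render each image from its nearest
    neighbor's code and penalize the pixel difference."""
    nn = nearest_neighbors(codes)
    recon = np.stack([render(codes[j]) for j in nn])
    return float(np.mean((np.asarray(images, dtype=float) - recon) ** 2))
```

The loss is small only when instances with similar codes really do look alike, which is exactly the cross-instance consistency the method exploits.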
Fonts are ubiquitous across documents and come in a variety of styles. They are represented either in a native vector format or rasterized to produce fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation, when encoded via a network, results in loss of data fidelity, as font-specific discontinuities like edges and corners are difficult to represent using neural networks. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground-truth multi-channel signals is a problem in itself. Instead, we propose how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation on various tasks, including reconstruction, interpolation, and synthesis, to demonstrate clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
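The core observation above, that a complex glyph can be the union of simpler occupancy functions, can be demonstrated with hand-written implicits in place of learned ones. The axis-aligned boxes below are an assumed toy stand-in for the small networks the paper would learn; taking a pointwise max preserves the sharp corner where the parts meet.

```python
import numpy as np

def box_occupancy(lo, hi):
    """Occupancy of an axis-aligned box: +1 inside, -1 outside.
    A toy implicit function standing in for a learned one."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    return lambda p: 1.0 if np.all((lo <= p) & (np.asarray(p) <= hi)) else -1.0

def glyph_occupancy(p, parts):
    """Multi-implicit glyph: the union of its parts via a pointwise max,
    which keeps the corner where the two bars intersect sharp."""
    return max(f(p) for f in parts)

# A crude "L" built from a vertical bar and a horizontal bar.
L_parts = [box_occupancy((0, 0), (1, 4)), box_occupancy((0, 0), (3, 1))]
```

Querying points inside either bar returns +1, while points outside both return -1, so the composite boundary traces the "L" exactly, corner included.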
We introduce CharacterGAN, a generative model that can be trained on only a few samples (8-15) of a given character. Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback, allowing for intuitive reposing and animation. Since we have only very limited training samples, one of the key challenges is how to handle (dis)occlusions, e.g., when a hand moves behind or in front of the body. To address this, we introduce a novel layering approach that splits the input keypoints into different layers which are processed independently. These layers represent different parts of the character and provide a strong implicit bias that helps to obtain realistic results even with strong (dis)occlusions. To combine the features of the individual layers, we use an adaptive scaling approach conditioned on all keypoints. Finally, we introduce a mask connectivity constraint to reduce distortion artifacts caused by extreme out-of-distribution keypoint configurations at test time. We show that our approach outperforms recent baselines and creates realistic animations for diverse characters. We also show that our model can handle discrete state changes, for example a profile facing left or right, that the different layers do indeed learn features specific to the respective keypoints of those layers, and that our model scales to larger datasets when more data is available.
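The layering idea above can be sketched as plain keypoint bookkeeping plus back-to-front compositing. The layer assignment and the single-channel "images" below are illustrative assumptions; in the actual model each layer is generated by a network before being composited.

```python
import numpy as np

# Illustrative layer assignment (in practice this split is chosen per character).
LAYERS = {"back": ["left_hand"], "body": ["head", "torso"], "front": ["right_hand"]}

def split_keypoints(keypoints):
    """Split a {name: (x, y)} keypoint dict into per-layer dicts,
    so each layer can be processed independently."""
    return {layer: {k: keypoints[k] for k in names if k in keypoints}
            for layer, names in LAYERS.items()}

def composite(layer_images, layer_masks, order=("back", "body", "front")):
    """Alpha-composite layer outputs back to front, so content in the
    'front' layer correctly occludes the body, and vice versa."""
    out = np.zeros_like(layer_images[order[0]])
    for name in order:
        m = layer_masks[name]
        out = out * (1 - m) + layer_images[name] * m
    return out
```

Because occlusion is resolved by the compositing order rather than learned from scratch, even a handful of training samples can produce plausible hand-over-body configurations.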
Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.
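For context, the shortest-path baseline that the abstract compares against can be sketched as Dijkstra's algorithm over a per-pixel cost map (e.g., one derived from a segmentation mask, with low cost on vessel pixels). This is a generic sketch of the conventional approach, not the authors' exact baseline.

```python
import heapq

def min_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost grid: the conventional baseline recovers a
    branch centerline as the minimal-cost 4-connected path between seeds."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

The limitation the abstract points out is visible here: when two projected branches overlap, a single cost map cannot distinguish them, so the minimal-cost path may jump between branches.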
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
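A standard readout for the pixel-level segmentation task described above is mean intersection-over-union; the sketch below shows that metric in its usual form (the benchmark's exact evaluation protocol may differ).

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, skipping classes
    absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union:
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```
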
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on the embedding of computation in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs, and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work only deals with puzzles of small-constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest known algorithms, our work provides evidence and, perhaps, a path to finding families of strong USPs that imply matrix multiplication algorithms that are more efficient than those currently known.
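A brute-force version of the strong-USP verification that the paper accelerates via SAT/IP reductions can be written directly from the definition. The condition encoded below (for every triple of row permutations that are not all equal, some cell must satisfy exactly two of the three piece-membership tests) is our recollection of the Cohn-Kleinberg-Szegedy-Umans definition and should be checked against the paper; the search over permutation triples is exponential and only usable for tiny puzzles.

```python
from itertools import permutations

def is_strong_usp(puzzle):
    """Brute-force strong-USP check. `puzzle` is a list of equal-length
    tuples over {1, 2, 3}. Runs in O((n!)^3 * n * k): tiny puzzles only."""
    n, k = len(puzzle), len(puzzle[0])
    for p1 in permutations(range(n)):
        for p2 in permutations(range(n)):
            for p3 in permutations(range(n)):
                if p1 == p2 == p3:
                    continue  # the identity triple is exempt
                # Require a cell (u, i) where exactly two of the three
                # conditions hold: pieces "collide" without triple overlap.
                if not any(
                    (puzzle[p1[u]][i] == 1)
                    + (puzzle[p2[u]][i] == 2)
                    + (puzzle[p3[u]][i] == 3) == 2
                    for u in range(n) for i in range(k)
                ):
                    return False
    return True
```

The paper's contribution is precisely avoiding this enumeration by reformulating the check as SAT and IP instances that scale to much larger puzzles.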
Agile robotics presents a difficult challenge with robots moving at high speeds requiring precise and low-latency sensing and control. Creating agile motion that accomplishes the task at hand while being safe to execute is a key requirement for agile robots to gain human trust. This requires designing new approaches that are flexible and maintain knowledge over world constraints. In this paper, we consider the problem of building a flexible and adaptive controller for a challenging agile mobile manipulation task of hitting ground strokes on a wheelchair tennis robot. We propose and evaluate an extension to work done on learning striking behaviors using a probabilistic movement primitive (ProMP) framework by (1) demonstrating the safe execution of learned primitives on an agile mobile manipulator setup, and (2) proposing an online primitive refinement procedure that utilizes evaluative feedback from humans on the executed trajectories.
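A ProMP, as used above, represents a distribution over trajectories by a weight vector $w \sim \mathcal{N}(\mu_w, \Sigma_w)$ projected through time-varying basis functions, $y(t) = \phi(t)^\top w$. Below is a minimal sampling sketch with normalized Gaussian bases; the basis count and width are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf_basis(ts, n_basis, width=0.05):
    """Normalized Gaussian radial basis functions over a phase in [0, 1];
    each row of the returned matrix sums to one."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(ts[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def sample_promp(mu_w, cov_w, ts, rng):
    """Draw one trajectory from a ProMP: sample basis weights
    w ~ N(mu_w, cov_w), then project, y(t) = phi(t)^T w."""
    w = rng.multivariate_normal(mu_w, cov_w)
    return rbf_basis(ts, len(mu_w)) @ w
```

Online refinement, as proposed in the paper, would then amount to updating `mu_w` and `cov_w` from human evaluative feedback on executed samples.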
Curating datasets for object segmentation is a difficult task. With the advent of large-scale pre-trained generative models, conditional image generation has been given a significant boost in result quality and ease of use. In this paper, we present a novel method that enables the generation of general foreground-background segmentation models from simple textual descriptions, without requiring segmentation labels. We leverage and explore pre-trained latent diffusion models to automatically generate weak segmentation masks for concepts and objects. The masks are then used to fine-tune the diffusion model on an inpainting task, which enables fine-grained removal of the object while at the same time providing a synthetic foreground and background dataset. We demonstrate that this method beats previous approaches in both discriminative and generative performance and closes the gap with fully supervised training while requiring no pixel-wise object labels. We show results on the task of segmenting four different objects (humans, dogs, cars, birds).
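The weak-mask step above can be illustrated in isolation: given some per-pixel relevance map for the concept (the paper derives it from a pre-trained latent diffusion model; producing that map is not shown here), a binary mask is obtained by normalizing and thresholding. The quantile cutoff below is an assumed illustrative choice.

```python
import numpy as np

def weak_mask(relevance, quantile=0.8):
    """Binarize a per-pixel relevance map into a weak foreground mask:
    min-max normalize, then keep pixels above the given quantile."""
    r = np.asarray(relevance, dtype=float)
    r = (r - r.min()) / (r.max() - r.min() + 1e-8)
    return (r >= np.quantile(r, quantile)).astype(np.uint8)
```

Such masks are noisy, which is why the pipeline uses them only as weak supervision for the inpainting fine-tune rather than as final segmentations.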
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.